AI-BOM Workshop at RSA Conference 2024
Workshop held at RSA 2024
https://github.com/aibom-workshop/rsa-2024
AI software supply chain security is the bedrock of the integrity, authenticity, and resilience of AI systems throughout their lifecycle. An AI-BOM, or AI Bill of Materials, is crucial for software supply chain security: it provides a comprehensive inventory of the components within an AI system and the properties of its security operations (MLSecOps). AI-BOMs enable proactive measures to enhance security, mitigate threats, and maintain the integrity of AI systems, and they serve as a foundational tool for fostering trust, accountability, and resilience across the AI supply chain ecosystem.
Welcome to an enlightening afternoon at the AI-BOM Workshop, held during RSAC 2024. This workshop delves into the critical realm of AI software supply chain security. Expert speakers illuminate key facets of AI-BOM and AI software supply chain security, and participants engage in collaborative discussions alongside industry leaders, shaping best practices and charting the path forward. With closing remarks from a senior US government official from CISA, the workshop offers a comprehensive exploration of strategies for securing AI landscapes across all industries.
Table of Contents
(Scroll for references and notes from the workshop.)
Agenda
Doors open at 12:30 PM PDT.
Event details: AIBOM-workshop-RSAC2024
Sponsored by SAP, Security Architecture Podcast, Manifest, and Cybeats.
The recording is published on the YouTube channel https://www.youtube.com/@SoftwareSupplyChainSecurity.
Link to the lightning talks video: youtu.be
The presentations for the talks are also uploaded to this repo.
Notes for this workshop (as marked in columns below) were generated by ChatGPT from the recording transcript.
| Time | Topic | Speaker | Notes |
|---|---|---|---|
| 13:00 | Opening remarks | Niall P. Brennan, VP, Global Head of Strategic Engagement, Government and External Security Partners, SAP Global Security and Compliance (SGSC) | Welcomed participants and thanked the sponsors (SAP, Security Architecture Podcast, Cybeats, and Manifest). |
| 13:10 | Lightning talks: ongoing efforts on AIBOM in the community | | |
| | "What's Inside There? Model Metadata and Metrics for AI/ML BoMs" | Diana Kelley and Sam Washko (10 min) | Introduced the concept of the AI-BOM and its similarities to traditional software BOMs. Emphasized transparency and the importance of understanding the models and datasets used in ML systems. Key metadata fields included model type, source, training data, etc. |
| | Recap of ongoing workstreams on AI supply chain security from CycloneDX and SPDX | Steve Springett (5 min) and Helen Oakley (5 min) | Presented CycloneDX's and SPDX's current and future support for AI-BOMs (a rough sketch of such a document appears after this table). Steve introduced CycloneDX's Blueprints, attestation, and environmental considerations. Helen highlighted SPDX 3.0 features such as the AI and Data Profiles. |
| | "AI Risk Assessment through Threat Modeling and use cases for AIBOM automation" | Helen Oakley (10 min) | Discussed integrating AI-BOMs into threat modeling. Identified challenges and opportunities in automating threat modeling for AI risks. |
| | "The State of AIBOMs: use cases, contents, regulations, and tools" | Daniel Bardenstein (10 min) | Explored the current state of AI-BOMs, emphasizing regulatory frameworks and compliance needs. Introduced mapping AI-BOM fields to SBOM standards such as CycloneDX and SPDX. |
| | "Understanding vulnerabilities and weaknesses of AI" | Dmitry Raidman (10 min) | Shared challenges in identifying models, datasets, and versions. Introduced the concept of ML vulnerabilities (MLVs) to differentiate them from traditional CVEs. |
| | "AI Policy and Software Supply Chain: transparency and security for managing suppliers, services and product" | Nicholas Vidovich (10 min) | Highlighted the role of transparency in software supply chain security. Emphasized policy considerations for suppliers, services, and products involving AI. |
| | "The role of AI BOMs in providing the transparency necessary to foster the safety and security of AI and our Critical Infrastructure" | Alex Sharpe | Introduced the concept of "nutrition labels" for AI-BOMs. Discussed key considerations such as service delivery models, modality, and use cases. |
| 14:20 | Break | | |
| 14:35 | Structured group discussion (details below) | | Key challenges and opportunities identified: 1) Traceability and explainability in AI models. 2) Defining standardised fields across multiple frameworks. 3) Automating AI-BOM generation. Next steps: 1) Document and publish findings on GitHub. 2) Build community: continue discussions in the AI BOM Slack community and form an AI BOM Tiger Team under the CISA SBOM initiative. 3) Research and development: develop practical solutions for data provenance and traceability, and explore integration with existing SBOM standards. |
| 15:35 | Closing remarks | Allan Friedman, Senior Advisor and Strategist at CISA (5 min) | Proposed properties included model name, version, hash, framework, training datasets, etc., with a focus on the fields that are easiest to automate. Participants were encouraged to join the Tiger Team and continue collaborating on solutions for AI-BOMs. Register for the CISA SBOM Tiger Team: AIBOM here. |
| 15:40 | Networking | | |
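The CycloneDX and SPDX talks above cover where machine-readable AI-BOM support already exists. As a rough illustration of what such a document can carry, the Python sketch below assembles a minimal CycloneDX-style ML-BOM and prints it as JSON. The model name, dataset name, and hash value are placeholders, and the property names only approximate the CycloneDX 1.5 schema (the `machine-learning-model` component type and `modelCard`), so consult the published specification rather than treating this sketch as the definitive format.

```python
# Minimal sketch of a CycloneDX-style AI-BOM. Field names approximate the
# CycloneDX 1.5 ML-BOM schema; names, versions, and hash values are placeholders.
import json

ai_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            # "machine-learning-model" is a component type introduced in CycloneDX 1.5.
            "type": "machine-learning-model",
            "name": "sentiment-classifier",   # hypothetical model name
            "version": "2.3.0",
            "hashes": [
                {"alg": "SHA-256", "content": "<sha256-of-model-artifact>"}
            ],
            "modelCard": {
                "modelParameters": {
                    "task": "text-classification",
                    "architectureFamily": "transformer",
                    "datasets": [
                        # hypothetical training dataset reference
                        {"type": "dataset", "name": "reviews-corpus-v1"}
                    ],
                },
            },
        }
    ],
}

print(json.dumps(ai_bom, indent=2))
```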
Structured Group Discussion
Attendees will divide into groups, each focusing on a specific topic. The first 5 minutes will be dedicated to introductions, during which each group will select its representative. Each group will then spend 10 minutes discussing its topic. Afterward, each group will present its ideas and potential hurdles in a brief 2-3 minute pitch to the rest of the participants. The remaining time will be used for collective brainstorming to explore the challenges and best practices further. The results of this discussion will be documented on this page.
| # | Topic | Description | Notes |
|---|---|---|---|
| 1 | What fields should (and should not) be in an AIBOM? | Explores the essential and non-essential components to include in an AIBOM. (Examples: model weights, visualizations of model performance) | Discussed essential components such as model name, version, framework, training datasets, etc. Considered whether model weights should be included, given their size. Also discussed model metrics and hashes. |
| 2 | Minimum elements of an AIBOM | Discusses the minimum set of elements required in an AIBOM to ensure comprehensive coverage and functionality. | |
| 3 | Collection of data for AIBOM properties | Examines the process of collecting and managing data for AIBOM properties. (Example: data about training) | Discussed challenges in collecting data for AIBOM properties, including author credibility, data provenance, and validation (see the hashing sketch after this table). Emphasized the need for standardised methods of data collection. |
| 4 | Standardised framework for AI dev & DevOps (MLOps) | Explores the development and DevOps practices necessary to establish a standardised framework for AI. (Example: model versioning) | |
| 5 | AI "Risks" and "Vulnerabilities" (as they pertain to fields in the AIBOM) | Analyzes the risks and vulnerabilities associated with AI systems, specifically in relation to AIBOM fields. | Discussed the differentiation between AI vulnerabilities (MLVs) and traditional CVEs. Explored the challenges of identifying AI-related risks. Highlighted the importance of signing mechanisms for data provenance. |
| 6 | Creating or identifying infrastructure for AI risks | Discusses establishing or identifying infrastructure similar to NVD, CVE, CVSS, EPSS, and KEV for managing AI-related risks. | Emphasized extending the existing CVE-based system to include ML vulnerabilities. Proposed creating standardised scoring systems and establishing CNAs for ML risks. |
| 7 | AIBOM use cases (including business operations & risk management) | Explores use cases of the AIBOM in business operations and risk management to demonstrate its practical applications. | Identified use cases including medical devices, malware detection, vulnerability management, and transparency. Highlighted compliance needs around bias, fairness, and accountability. |
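Topics 3 and 5 both circle back to data provenance: tying an AIBOM entry to the exact model and dataset artifacts it describes. The sketch below shows one hypothetical first step, computing a SHA-256 digest for each artifact so it can be recorded in (and later re-verified against) the AIBOM. The file paths and entry layout are illustrative only, and any signing of the digests would be a separate step.

```python
# Hypothetical helper for recording artifact digests alongside an AIBOM entry.
# File paths and the output layout are illustrative, not a prescribed format.
import hashlib
import json
from pathlib import Path


def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its SHA-256 digest as a hex string."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # Placeholder artifact paths; substitute real model/dataset files.
    artifacts = {
        "model_weights": Path("model/weights.bin"),
        "training_data": Path("data/train.csv"),
    }
    entry = {
        name: {"alg": "SHA-256", "content": sha256_digest(path)}
        for name, path in artifacts.items()
        if path.exists()  # skip missing files in this sketch
    }
    print(json.dumps(entry, indent=2))
```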
What's next?
Check out the examples of AIBOMs within this repo.
Register for CISA SBOM Tiger Team: AIBOM here.
Stay tuned for future engagements on AIBOM (forums, events, workshops)!
"This is not the end, this is only the beginning. Secure your AI software supply chain today, because innovation thrives where trust and protection meet!" ~Helen Oakley ;-)